Upper and Lower Bounds on the Smoothed Complexity of the Simplex Method
The simplex method for linear programming is known to be highly efficient in
practice, and understanding its performance from a theoretical perspective is
an active research topic. The framework of smoothed analysis, first introduced
by Spielman and Teng (JACM '04) for this purpose, defines the smoothed
complexity of solving a linear program with a given number of variables and
constraints as the expected running time when Gaussian noise of a given
variance is added to the LP data. We prove an improved upper bound on the
smoothed complexity of the simplex method, with a better dependence on the
noise parameter than the previous best bound.
We accomplish this through a new analysis of the \emph{shadow bound}, key to
earlier analyses as well. Illustrating the power of our new method, we use it
to prove a nearly tight upper bound on the smoothed complexity of
two-dimensional polygons.
We also establish the first non-trivial lower bound on the smoothed
complexity of the simplex method, proving a high-probability lower bound on
the number of pivot steps required by the \emph{shadow vertex simplex method}.
A key part of our analysis is a new variation on the extended formulation for
the regular $n$-gon. We end with a numerical experiment that suggests this
analysis could be further improved.
Comment: 41 pages, 5 figures
Geometric aspects of linear programming : shadow paths, central paths, and a cutting plane method
Most everyday algorithms are well understood; predictions made theoretically
about them closely match what we observe in practice. This is not the case for
all algorithms, however, and some remain poorly understood at a theoretical level.
This holds for many algorithms used to solve optimization problems from operations research.
Solving such optimization problems is essential in many industries and is done every day.
One important example of such optimization problems is linear programming.
A handful of different algorithms are popular in practice,
among them one that has been in use for almost 80 years.
Nonetheless, our theoretical understanding of these algorithms is limited.
This thesis makes progress towards a better understanding of these key algorithms
for linear programming, among which are the simplex method, interior point methods,
and cutting plane methods.
Smoothed analysis of the simplex method
In this chapter, we give a technical overview of smoothed analyses of the shadow vertex simplex method for linear programming (LP). We first review the properties of the shadow vertex simplex method and its associated geometry. We begin the smoothed analysis discussion with an analysis of the successive shortest path algorithm for the minimum-cost maximum-flow problem under objective perturbations, a classical instantiation of the shadow vertex simplex method. Then we move to general linear programming and give an analysis of a shadow vertex based algorithm for linear programming under Gaussian constraint perturbations.
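To make the perturbation model concrete (this is a minimal numerical sketch of the setup, not the chapter's analysis; the instance, noise level, and use of SciPy are illustrative assumptions), one can add Gaussian noise to the constraint data of an LP and solve the perturbed instance:

```python
import numpy as np
from scipy.optimize import linprog

# Smoothed-analysis style experiment: perturb the constraint matrix of a
# synthetic LP with Gaussian noise of standard deviation sigma, then solve.
rng = np.random.default_rng(0)
n, d, sigma = 30, 5, 0.1
A = rng.standard_normal((n, d))      # constraint normals
b = np.ones(n)                       # right-hand sides
c = -np.ones(d)                      # minimize -sum(x), i.e. maximize sum(x)

A_pert = A + sigma * rng.standard_normal((n, d))  # Gaussian perturbation
res = linprog(c, A_ub=A_pert, b_ub=b,
              bounds=[(0.0, 10.0)] * d, method="highs")
print(res.status)  # status 0 means an optimum was found
```

The box bounds keep the synthetic instance feasible (x = 0 is always feasible) and bounded, so the perturbed LP always has an optimum.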
A scaling-invariant algorithm for linear programming whose running time depends only on the constraint matrix
Following the breakthrough work of Tardos in the bit-complexity model,
Vavasis and Ye gave the first exact algorithm for linear programming in the
real model of computation with running time depending only on the constraint
matrix $A$. For solving a linear program (LP), Vavasis and Ye developed a primal-dual
interior point method using a 'layered least squares' (LLS) step, and showed
that a number of iterations bounded in terms of the condition measure
$\bar\chi_A$ suffices to solve (LP) exactly, where $\bar\chi_A$ controls the
size of solutions to linear systems related to $A$.
Monteiro and Tsuchiya, noting that the central path is invariant under
rescalings of the columns of $A$ and of $c$, asked whether there exists an LP
algorithm depending instead on the measure $\bar\chi^*_A$, defined as the
minimum value of $\bar\chi$ achievable by a column rescaling of $A$,
and gave strong evidence that this should be the case. We resolve this open
question affirmatively.
Our first main contribution is a polynomial-time algorithm which
works on the linear matroid of $A$ to compute a nearly optimal diagonal
rescaling of $A$. This algorithm also allows us to approximate the value of
$\bar\chi^*_A$ up to a bounded factor. As our second main contribution, we develop a
scaling-invariant LLS algorithm, together with a refined potential-function
based analysis for LLS algorithms in general. With this analysis, we derive an
improved iteration bound for optimally solving (LP) using our algorithm. The
same argument also yields an improvement on the iteration complexity bound of
the original Vavasis-Ye algorithm.
A simple method for convex optimization in the oracle model
We give a simple and natural method for computing approximately optimal solutions for minimizing a convex function f over a convex set K given by a separation oracle. Our method utilizes the Frank–Wolfe algorithm over the cone of valid inequalities of K and subgradients of f. Under the assumption that f is L-Lipschitz and that K contains a ball of radius r and is contained inside the origin-centered ball of radius R, using O((RL)^2/ε^2 · R^2/r^2) iterations and calls to the oracle, our main method outputs a point x ∈ K satisfying f(x) ≤ ε + min_{z∈K} f(z). Our algorithm is easy to implement, and we believe it can serve as a useful alternative to existing cutting plane methods. As evidence towards this, we show that it compares favorably in terms of iteration counts to the standard LP based cutting plane method and the analytic center cutting plane method, on a testbed of combinatorial, semidefinite and machine learning instances.
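The abstract compares against the standard LP-based cutting plane method. A minimal sketch of that baseline (Kelley's method) follows; to keep the separation oracle trivial, K is taken to be a box, and the test function, starting point, and iteration budget are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.optimize import linprog

def kelley(f, grad, d, lo=-1.0, hi=1.0, iters=100):
    """Kelley's LP-based cutting plane method for min f over the box [lo,hi]^d.

    LP variables are (z, t). Each subgradient cut enforces
    t >= f(x) + grad(x).(z - x), i.e. grad(x).z - t <= grad(x).x - f(x);
    minimizing t over all cuts gives a lower bound on the optimum, while the
    queried points give upper bounds.
    """
    bounds = [(lo, hi)] * d + [(None, None)]
    c = np.zeros(d + 1)
    c[-1] = 1.0                      # objective: minimize t
    x = np.zeros(d)
    A, b, best = [], [], f(x)
    for _ in range(iters):
        g = grad(x)
        A.append(np.append(g, -1.0))  # cut coefficients for (z, t)
        b.append(g @ x - f(x))
        res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                      bounds=bounds, method="highs")
        x = res.x[:d]                # next query point: LP minimizer
        best = min(best, f(x))
    return best

f = lambda x: float(np.sum((x - 0.3) ** 2))   # smooth convex test function
grad = lambda x: 2.0 * (x - 0.3)
best = kelley(f, grad, d=2)
print(best)
```

For a general K given by a separation oracle, one would additionally query the oracle at each LP minimizer and add the returned separating hyperplane as a feasibility cut.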
A nearly optimal randomized algorithm for explorable heap selection
Explorable heap selection is the problem of selecting the nth smallest value in a binary heap. The key values can only be accessed by traversing through the underlying infinite binary tree, and the complexity of the algorithm is measured by the total distance traveled in the tree (each edge has unit cost). This problem was originally proposed as a model to study search strategies for the branch-and-bound algorithm with storage restrictions by Karp, Saks and Wigderson (FOCS '86), who gave deterministic and randomized n·exp(O(√(log n))) time algorithms using O(log(n)^2.5) and O(√(log n)) space, respectively. We present a new randomized algorithm with running time O(n log(n)^3) using O(log n) space, substantially improving the previous best randomized running time at the expense of slightly increased space usage. We also show an Ω(n log(n)/log(log(n))) lower bound for any algorithm that solves the problem in the same amount of space, indicating that our algorithm is nearly optimal.
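To illustrate the selection problem itself (a baseline best-first frontier search; it ignores the travel-cost measure that the paper analyses, and the example key function is an assumption for demonstration):

```python
import heapq

def nth_smallest(key, n):
    """n-th smallest key (1-indexed) in an infinite binary min-heap.

    Nodes are tuples of 0/1 moves from the root (); key(node) returns the
    node's key, and children have larger keys (heap property). The heap
    property guarantees the frontier always contains the next-smallest
    unexplored key, so n-1 pops leave the n-th smallest at the top.
    """
    frontier = [(key(()), ())]  # (key, node) pairs ordered by key
    for _ in range(n - 1):
        _, v = heapq.heappop(frontier)
        for child in (v + (0,), v + (1,)):
            heapq.heappush(frontier, (key(child), child))
    return frontier[0][0]

# Example heap: a node's key is its depth plus the number of right turns.
key = lambda v: len(v) + sum(v)
print([nth_smallest(key, i) for i in range(1, 6)])  # → [0, 1, 2, 2, 3]
```

This explores O(n) nodes but may travel far more than the optimal algorithm, since it jumps freely between frontier nodes rather than accounting for walking distance in the tree.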
Asymptotic bounds on the combinatorial diameter of random polytopes
The combinatorial diameter of a polytope is the
maximum shortest path distance between any pair of vertices. In this paper, we
provide upper and lower bounds on the combinatorial diameter of a random
"spherical" polytope, which are tight to within one factor of the dimension when the
number of inequalities is large compared to the dimension. More precisely, for
a polytope defined by the intersection of i.i.d.\
half-spaces whose normals are chosen uniformly from the sphere, we give
matching upper and lower bounds, up to a factor of the dimension, that hold
with high probability when the number of half-spaces is sufficiently large.
For the upper bound, we first prove that the number of vertices in any fixed
two-dimensional projection sharply concentrates around its expectation when
the number of half-spaces is large, where we rely on a bound on the
expectation due to Borgwardt [Math. Oper. Res., 1999]. To obtain the diameter
upper bound, we stitch these ``shadow paths'' together over a suitable net
using worst-case diameter bounds to connect vertices to the nearest shadow. For
the lower bound, we first reduce to lower bounding the diameter of the dual
polytope, corresponding to a random convex hull, by relating the diameter of
the polytope to that of its dual.
We then prove a lower bound on the length of the shortest path between any
pair of ``nearly'' antipodal vertices of the dual polytope.
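The dual object studied here, a random convex hull of points on the sphere, is easy to experiment with numerically. A minimal sketch (3-dimensional only, with an arbitrary sample size; the combinatorial diameter computed is that of the hull's 1-skeleton, which is not the paper's exact reduction):

```python
import numpy as np
from collections import deque
from itertools import combinations
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)
pts = rng.standard_normal((200, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # uniform points on S^2
hull = ConvexHull(pts)

# Build the 1-skeleton: two hull vertices are adjacent if they share an
# edge of some facet triangle.
adj = {int(v): set() for v in hull.vertices}
for tri in hull.simplices:
    for a, b in combinations(tri, 2):
        adj[int(a)].add(int(b))
        adj[int(b)].add(int(a))

def ecc(src):
    """Eccentricity of src in the skeleton graph, via BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return max(dist.values())

diam = max(ecc(v) for v in adj)  # combinatorial diameter of the skeleton
print(diam)
```

Since every sampled point lies on the sphere, all 200 points are vertices of the hull, and the skeleton is connected, so the BFS-based diameter is well defined.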